Counterfactual risk minimization is a framework for offline policy optimization from logged data, where each sample consists of a context, an action, a propensity score, and a reward. In this work, we build on this framework and propose a learning method for settings where the rewards of some samples are not observed, so the logged data consist of a subset of samples with unknown rewards and a subset of samples with known rewards. This setting arises in many application domains, including advertising and healthcare. Even though reward feedback is missing for some samples, the unknown-reward samples can still be exploited to minimize the risk; we refer to this setting as semi-counterfactual risk minimization. To approach this learning problem, we derive new upper bounds on the true risk under the inverse propensity score estimator. We then build on these bounds to propose a regularized counterfactual risk minimization method that relies only on the logged unknown-reward dataset and is therefore reward-independent. We also propose another algorithm based on generating pseudo-rewards for the logged unknown-reward dataset. Experimental results with neural networks and benchmark datasets show that these algorithms can make use of the logged unknown-reward dataset in addition to the logged known-reward dataset.
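For context, the standard inverse propensity score (IPS) estimator and the variance-regularized CRM objective that this line of work builds on are commonly written as follows (a sketch of the usual formulation, with $\pi_0$ the logging policy, $\delta_i$ the logged loss, and $\lambda$ a regularization weight; this is the baseline setup, not the paper's new bounds):

$$\hat{R}_{\mathrm{IPS}}(\pi)=\frac{1}{n}\sum_{i=1}^{n}\delta_i\,\frac{\pi(a_i\mid x_i)}{\pi_0(a_i\mid x_i)},\qquad \hat{\pi}=\operatorname*{arg\,min}_{\pi\in\Pi}\ \hat{R}_{\mathrm{IPS}}(\pi)+\lambda\sqrt{\frac{\widehat{\mathrm{Var}}_{\mathrm{IPS}}(\pi)}{n}}.$$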
In recent years, there has been significant progress in the use of deep learning methods for inverse problems such as denoising, compressive sensing, inpainting, and super-resolution. While this line of work has been driven primarily by practical algorithms and experiments, it has also given rise to a variety of intriguing theoretical questions. In this paper, we survey some of the prominent theoretical developments in this area, in particular generative priors, untrained neural network priors, and unrolling algorithms. In addition to summarizing existing results on these topics, we also highlight several ongoing challenges and open problems.
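As a concrete reference point, the compressive-sensing-with-a-generative-prior formulation studied in much of this theory can be sketched as follows (assuming a linear forward model $A$, noise $\eta$, and a pretrained generator $G$; the notation is illustrative rather than taken from the survey):

$$y = A x^{*}+\eta,\qquad \hat{z}=\operatorname*{arg\,min}_{z}\ \big\|A\,G(z)-y\big\|_2^{2},\qquad \hat{x}=G(\hat{z}).$$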
We provide an information-theoretic analysis of the generalization ability of Gibbs-based transfer learning algorithms, focusing on two popular transfer learning approaches, $\alpha$-weighted ERM and two-stage ERM. Our key result is an exact characterization of the generalization behavior using the conditional symmetrized KL information between the output hypothesis and the target training samples given the source samples. Our results can also be applied to provide novel distribution-free generalization error upper bounds for these two aforementioned Gibbs algorithms. Our approach is versatile, as it also characterizes the generalization error and the excess risk of these two Gibbs algorithms in the asymptotic regime, where they converge to $\alpha$-weighted ERM and two-stage ERM, respectively. Based on our theoretical results, we show that the benefit of transfer learning can be viewed as a bias-variance trade-off, with the bias induced by the source distribution and the variance induced by the lack of target samples. We believe this viewpoint can guide the choice of transfer learning algorithms in practice.
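As a hedged sketch of the objects involved, the $\alpha$-weighted empirical risk and its Gibbs counterpart are often written as follows (with target samples $z^{T}_i$, source samples $z^{S}_j$, loss $\ell$, inverse temperature $\gamma$, and prior $\pi$; the notation is illustrative and may differ from the paper's):

$$\hat{E}_{\alpha}(h)=\frac{\alpha}{m}\sum_{i=1}^{m}\ell\big(h,z^{T}_{i}\big)+\frac{1-\alpha}{n}\sum_{j=1}^{n}\ell\big(h,z^{S}_{j}\big),\qquad P_{\gamma}(h\mid\text{data})\propto\pi(h)\,e^{-\gamma\,\hat{E}_{\alpha}(h)}.$$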
Neural networks have demonstrated excellent performance on a wide range of tasks. In particular, recurrent architectures based on long short-term memory (LSTM) cells have shown an outstanding ability to model temporal dependencies in real-world data. However, standard recurrent architectures cannot estimate their uncertainty, which is essential for safety-critical applications such as medicine. In contrast, Bayesian recurrent neural networks (RNNs) are able to provide uncertainty estimates with improved accuracy. Nonetheless, Bayesian RNNs are computationally and memory demanding, which limits their practicality despite their advantages. To address this issue, we propose an FPGA-based hardware design to accelerate Bayesian LSTM-based RNNs. To further improve the overall algorithm-hardware performance, a co-design framework is proposed to explore the most suitable algorithm-hardware configuration for Bayesian RNNs. We conduct extensive experiments on a healthcare application to demonstrate the effectiveness of our design and framework. Compared with a GPU implementation, our FPGA-based design achieves up to 10 times speedup and nearly 106 times higher energy efficiency. To the best of our knowledge, this is the first work targeting the acceleration of Bayesian RNNs on FPGAs.
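The uncertainty-estimation procedure that such an accelerator speeds up can be illustrated with a minimal software sketch, assuming a Monte Carlo dropout approximation of a Bayesian LSTM; the layer sizes, dropout rate, and sample count below are illustrative assumptions, not the paper's configuration:

```python
# Minimal sketch: MC-dropout LSTM producing a predictive mean and uncertainty.
import torch
import torch.nn as nn

class MCDropoutLSTM(nn.Module):
    def __init__(self, n_features, hidden, n_outputs, p=0.25):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.drop = nn.Dropout(p)          # kept stochastic at inference time
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))  # use the last time step

@torch.no_grad()
def predict_with_uncertainty(model, x, samples=30):
    model.train()                          # keep dropout active for sampling
    preds = torch.stack([model(x) for _ in range(samples)])
    return preds.mean(0), preds.std(0)     # predictive mean and spread

model = MCDropoutLSTM(n_features=8, hidden=64, n_outputs=1)
x = torch.randn(4, 50, 8)                  # toy batch of 4 sequences
mean, std = predict_with_uncertainty(model, x)
```

Each of the repeated stochastic forward passes in this loop is what the FPGA design is meant to execute efficiently.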
Neural networks (NNs) have demonstrated their potential in a wide range of applications such as image recognition, decision making, and recommendation systems. However, standard NNs cannot capture their model uncertainty, which is crucial for many safety-critical applications including healthcare and autonomous vehicles. In contrast, Bayesian neural networks (BNNs) are able to express the uncertainty in their predictions in a mathematically grounded way. Nonetheless, BNNs have not been widely adopted in industrial practice, mainly because of their high computational cost and limited hardware performance. This work proposes a novel FPGA-based hardware architecture to accelerate BNN inference via Monte Carlo Dropout. Compared with other state-of-the-art BNN accelerators, the proposed accelerator achieves up to 4 times higher energy efficiency and 9 times better compute efficiency. Considering partial Bayesian inference, an automatic framework is proposed to explore the trade-off between hardware and algorithmic performance. Extensive experiments demonstrate that our proposed framework can effectively find the optimal points in the design space.
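A minimal sketch of Monte Carlo Dropout inference with a partial Bayesian split is given below, assuming dropout is applied only to the last layers; all layer sizes, the dropout rate, and the sample count are illustrative assumptions rather than the accelerator's actual configuration:

```python
# Minimal sketch: partial Bayesian inference via MC dropout on the last layers only.
import torch
import torch.nn as nn

class PartialBayesianMLP(nn.Module):
    def __init__(self, d_in=32, d_hidden=128, n_classes=10, p=0.3):
        super().__init__()
        self.deterministic = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.bayesian = nn.Sequential(              # only this part is sampled
            nn.Dropout(p), nn.Linear(d_hidden, d_hidden), nn.ReLU(),
            nn.Dropout(p), nn.Linear(d_hidden, n_classes))

    def forward(self, x):
        return self.bayesian(self.deterministic(x))

@torch.no_grad()
def mc_predict(model, x, samples=20):
    model.deterministic.eval()
    model.bayesian.train()                          # dropout stays on here only
    feats = model.deterministic(x)                  # computed once, reused per sample
    probs = torch.stack([model.bayesian(feats).softmax(-1) for _ in range(samples)])
    return probs.mean(0), probs.var(0)              # predictive mean and variance
```

Reusing the deterministic features across samples illustrates why a partial Bayesian split can reduce the compute cost that the hardware has to absorb.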
In recent years, the number of deployed IoT devices has exploded, reaching the scale of billions. However, this growth has been accompanied by new cybersecurity issues, such as the deployment of unauthorized devices, malicious code modification, malware deployment, or vulnerability exploitation. This fact has motivated the need for new device identification mechanisms based on behavior monitoring. Moreover, these solutions have recently leveraged Machine and Deep Learning techniques, driven by advances in this field and increased processing capabilities. In contrast, attackers have not stood still and have developed adversarial attacks focused on context modification and ML/DL evasion applied to IoT device identification solutions. This work explores the performance of hardware behavior-based individual device identification, how it is affected by possible context- and ML/DL-focused attacks, and how its resilience can be improved using defense techniques. In this sense, it proposes an LSTM-CNN architecture based on hardware performance behavior for individual device identification. The proposed architecture is then compared with previous techniques on a hardware performance dataset collected from 45 Raspberry Pi devices running identical software. The LSTM-CNN improves on previous solutions, achieving an average F1-Score above 0.96 and a minimum TPR of 0.8 across all devices. Afterward, context- and ML/DL-focused adversarial attacks were applied against the previous model to test its robustness. A temperature-based context attack was not able to disrupt the identification. However, some state-of-the-art ML/DL evasion attacks were successful. Finally, adversarial training and model distillation defense techniques are selected to improve the model's resilience to evasion attacks without degrading its performance.
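A minimal sketch of an LSTM-CNN classifier over windows of hardware-performance time series is shown below; the layer sizes, window length, and feature count are assumptions for illustration and are not the exact architecture evaluated in this work:

```python
# Minimal sketch: LSTM followed by a 1-D CNN head for per-device classification.
import torch
import torch.nn as nn

class LSTMCNNIdentifier(nn.Module):
    def __init__(self, n_features=10, n_devices=45):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 64, batch_first=True)
        self.conv = nn.Sequential(                      # convolve over time
            nn.Conv1d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1))
        self.fc = nn.Linear(32, n_devices)

    def forward(self, x):                               # x: (batch, time, features)
        h, _ = self.lstm(x)                             # (batch, time, 64)
        z = self.conv(h.transpose(1, 2)).squeeze(-1)    # (batch, 32)
        return self.fc(z)                               # per-device logits

logits = LSTMCNNIdentifier()(torch.randn(2, 100, 10))   # toy windows of 100 steps
```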
Cybercriminals are moving toward zero-day attacks affecting resource-constrained devices such as single-board computers (SBC). Assuming that perfect security is unrealistic, Moving Target Defense (MTD) is a promising approach to mitigate attacks by dynamically altering target attack surfaces. Still, selecting suitable MTD techniques for zero-day attacks is an open challenge. Reinforcement Learning (RL) could be an effective approach to optimize MTD selection through trial and error, but the literature falls short when it comes to i) evaluating the performance of RL and MTD solutions in real-world scenarios, ii) studying whether behavioral fingerprinting is suitable for representing SBCs' states, and iii) calculating the resource consumption on SBCs. To address these limitations, this work proposes an online RL-based framework to learn the correct MTD mechanisms for mitigating heterogeneous zero-day attacks on SBCs. The framework uses behavioral fingerprinting to represent the SBCs' states and RL to learn MTD techniques that mitigate each malicious state. It has been deployed in a real IoT crowdsensing scenario with a Raspberry Pi acting as a spectrum sensor. In more detail, the Raspberry Pi was infected with different samples of command-and-control malware, rootkits, and ransomware, and the framework then selected among four existing MTD techniques. A set of experiments demonstrated the suitability of the framework for learning proper MTD techniques that mitigate all attacks (except one rootkit) while consuming <1 MB of storage and utilizing <55% CPU and <80% RAM.
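A minimal sketch of how tabular Q-learning could drive the MTD selection is shown below; the state labels, MTD action names, and reward scheme are illustrative assumptions, not the framework's actual implementation:

```python
# Minimal sketch: epsilon-greedy Q-learning over fingerprint-derived states
# and four MTD actions (action and state names below are assumed, not real).
import random
from collections import defaultdict

ACTIONS = ["ip_shuffle", "file_shuffle", "lib_sanitize", "ransomware_trap"]
STATES = ["benign", "c2_malware", "rootkit", "ransomware"]

Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
alpha, gamma, eps = 0.1, 0.9, 0.2

def choose(state):
    if random.random() < eps:                       # explore occasionally
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)          # otherwise exploit

def update(state, action, reward, next_state):
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Training loop outline: observe the fingerprint-derived state, deploy one MTD
# technique, then reward +1 if the device returns to "benign", else -1.
```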
Uncertainty quantification is crucial to inverse problems, as it can provide decision-makers with valuable information about the inversion results. For example, seismic inversion is a notoriously ill-posed inverse problem due to the band-limited and noisy nature of seismic data. It is therefore of paramount importance to quantify the uncertainties associated with the inversion process to ease the subsequent interpretation and decision-making processes. Within this frame of reference, sampling from a target posterior provides a fundamental approach to quantifying the uncertainty in seismic inversion. However, selecting appropriate prior information in a probabilistic inversion is crucial, yet non-trivial, as it influences the ability of sampling-based inference to provide geological realism in the posterior samples. To overcome such limitations, we present a regularized variational inference framework that performs posterior inference by implicitly regularizing the Kullback-Leibler divergence loss with a CNN-based denoiser by means of Plug-and-Play methods. We call this new algorithm Plug-and-Play Stein Variational Gradient Descent (PnP-SVGD) and demonstrate its ability to produce high-resolution, trustworthy samples representative of the subsurface structures, which we argue could be used for post-inference tasks such as reservoir modelling and history matching. To validate the proposed method, numerical tests are performed on both synthetic and field post-stack seismic data.
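For orientation, the SVGD particle update and the plug-and-play substitution of the prior score by a denoiser can be sketched as follows (with kernel $k$, particles $x_1,\dots,x_N$, data $d$, step size $\epsilon$, and denoiser $D_\sigma$; this is a generic sketch of the idea, not the paper's exact regularized objective):

$$x_i\leftarrow x_i+\epsilon\,\hat{\phi}(x_i),\qquad \hat{\phi}(x)=\frac{1}{N}\sum_{j=1}^{N}\Big[k(x_j,x)\,\nabla_{x_j}\log p(x_j\mid d)+\nabla_{x_j}k(x_j,x)\Big],$$

where the prior part of $\nabla_x\log p(x\mid d)=\nabla_x\log p(d\mid x)+\nabla_x\log p(x)$ is supplied implicitly by the denoiser, e.g. $\nabla_x\log p(x)\approx\big(D_\sigma(x)-x\big)/\sigma^{2}$.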
We present a Machine Learning (ML) case study to illustrate the challenges of clinical translation for a real-time AI-empowered echocardiography system using data from ICU patients in LMICs. The case study covers data preparation, curation, and labelling of 2D ultrasound videos from 31 ICU patients in LMICs, as well as model selection, validation, and deployment of three thinner neural networks to classify the apical four-chamber view (4CV). Results of the ML heuristics showed promising implementation, validation, and application of thinner networks for classifying 4CV with limited datasets. We conclude by noting the need for (a) datasets with improved diversity of demographics and diseases, and (b) further investigation of thinner models that can run on low-cost hardware so they can be clinically translated in the ICU in LMICs. The code and other resources to reproduce this work are available at https://github.com/vital-ultrasound/ai-assisted-echocardiography-for-low-resource-countries.
Model estimates obtained from traditional subspace identification methods may be subject to significant variance. This elevated variance is aggravated in the case of large models or of a limited sample size. Common solutions to reduce the effect of variance are regularized estimators, shrinkage estimators and Bayesian estimation. In the current work we investigate the latter two solutions, which have not yet been applied to subspace identification. Our experimental results show that our proposed estimators may reduce the estimation risk down to $40\%$ of that of traditional subspace methods.
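As a rough illustration of the shrinkage idea, the sketch below shrinks a subspace-identified state matrix toward a low-variance target via a convex combination; the target and the fixed weight are illustrative assumptions, not the estimators proposed in this work:

```python
# Minimal sketch: linear shrinkage of an identified state matrix toward a
# scaled-identity target (target choice and weight are illustrative only).
import numpy as np

def shrink_state_matrix(A_hat, weight):
    """Convex combination of the raw estimate and a low-variance target."""
    target = np.eye(A_hat.shape[0]) * np.trace(A_hat) / A_hat.shape[0]
    return (1.0 - weight) * A_hat + weight * target

A_hat = np.array([[0.9, 0.15], [-0.2, 0.7]])    # e.g. from a short data record
A_shrunk = shrink_state_matrix(A_hat, weight=0.3)
```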